
A CNN-Based Technique to Assist Layout-to-Generator Conversion for Analog Circuits

Jeong, Sungyu, Kim, Minsu, Kim, Byungsub

arXiv.org Artificial Intelligence

We propose a technique to assist in converting a reference layout of an analog circuit into a procedural layout generator by efficiently reusing available generators for sub-cell creation. The proposed convolutional neural network (CNN) model automatically detects sub-cells that can be generated by available generator scripts in the library and suggests using them in the hierarchically correct places of the generator software. In experiments, the CNN model examined sub-cells of a high-speed wireline receiver with a total of 4,885 sub-cell instances spanning 145 different sub-cell designs. The CNN model classified the sub-cell instances into 51 generatable classes and one not-generatable class; the not-generatable class indicates that no available generator can generate the classified sub-cell. The CNN model achieved 99.3% precision in examining the 145 different sub-cell designs, and it reduced the examination time from the 88 minutes required for manual examination to 18 seconds. The proposed CNN model also correctly classified unfamiliar sub-cells that differ substantially from the training dataset.
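The workflow the abstract describes, rasterize a sub-cell layout, run it through a CNN, and map the winning class either to an available generator script or to "not generatable", can be sketched as follows. This is our own minimal illustration (one hand-rolled convolution layer with random weights), not the paper's model; all names and shapes are assumptions:

```python
import numpy as np

# Illustrative constants: 51 generatable classes plus one
# "not generatable" class (we reserve index 0 for it).
N_CLASSES = 52

def conv2d(image, kernel):
    """Valid-mode 2D convolution over a single-channel image."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def classify(image, kernels, weights, bias):
    """One conv layer -> ReLU -> global average pool -> linear head."""
    feats = np.array([conv2d(image, k).clip(min=0).mean() for k in kernels])
    logits = weights @ feats + bias
    return int(np.argmax(logits))  # 0 would mean "no available generator"

rng = np.random.default_rng(0)
kernels = rng.standard_normal((8, 3, 3))
weights = rng.standard_normal((N_CLASSES, 8))
bias = rng.standard_normal(N_CLASSES)
layout = rng.random((16, 16))  # stand-in for a rasterized sub-cell
print(classify(layout, kernels, weights, bias))
```

A trained version of such a classifier would be run per sub-cell instance, which is what makes examining thousands of instances in seconds plausible.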


An Agent-Based Framework for the Automatic Validation of Mathematical Optimization Models

Zadorojniy, Alexander, Wasserkrug, Segev, Farchi, Eitan

arXiv.org Artificial Intelligence

Recently, using Large Language Models (LLMs) to generate optimization models from natural language descriptions has become increasingly popular. However, a major open question is how to validate that the generated models are correct and satisfy the requirements defined in the natural language description. In this work, we propose a novel agent-based method for the automatic validation of optimization models that builds upon and extends methods from software testing to address optimization modeling. The method consists of several agents that first generate a problem-level testing API, then generate tests utilizing this API, and lastly generate mutations specific to the optimization model (mutation testing is a well-known software testing technique for assessing the fault-detection power of a test suite). We detail this validation framework and show, through experiments, the high quality of validation provided by this agent ensemble in terms of the well-known software testing measure of mutation coverage.
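The mutation-coverage measure used to assess a test suite can be illustrated with a toy model: deliberately faulty variants (mutants) of the model are generated, and coverage is the fraction of mutants that at least one test detects (kills). A minimal sketch under our own toy setup, not the paper's framework:

```python
# Toy "optimization model": choose x in [lo, hi] to maximize c*x.
def model(c, lo, hi):
    """Reference model: the optimum sits at a bound, chosen by sign of c."""
    return hi if c >= 0 else lo

# Mutants: small, deliberate faults injected into the model.
mutants = [
    lambda c, lo, hi: lo if c >= 0 else hi,  # flipped branch
    lambda c, lo, hi: hi,                    # ignores the sign of c
    lambda c, lo, hi: hi if c > 0 else lo,   # boundary change at c == 0
]

# Problem-level tests, analogous to tests built on a testing API.
tests = [
    lambda f: f(1.0, 0.0, 5.0) == 5.0,   # positive cost -> upper bound
    lambda f: f(-1.0, 0.0, 5.0) == 0.0,  # negative cost -> lower bound
    lambda f: f(0.0, 2.0, 3.0) == 3.0,   # tie-break at c == 0
]

def mutation_coverage(tests, mutants):
    """Fraction of mutants killed by at least one failing test."""
    killed = sum(any(not t(m) for t in tests) for m in mutants)
    return killed / len(mutants)

assert all(t(model) for t in tests)      # suite passes on the real model
print(mutation_coverage(tests, mutants))
```

Here every mutant is killed, so coverage is 1.0; a suite with low coverage would signal that the generated tests cannot distinguish the model from faulty variants.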




Supplementary Material: Interpretable multi-timescale models for predicting fMRI responses to continuous natural speech

Neural Information Processing Systems

Additional subject flatmaps are shown in figures 2-7 at the end of the document. Only significantly predicted voxels are shown. These flatmaps correspond to figures 3-5 in the main text and follow the same colormap. Note that subject S04 is excluded from this study due to poor data quality, resulting in 6 subjects overall. Results from subject S03 (highest number of significant voxels) are shown in the main text.



gACSON software for automated segmentation and morphology analyses of myelinated axons in 3D electron microscopy

Behanova, Andrea, Abdollahzadeh, Ali, Belevich, Ilya, Jokitalo, Eija, Sierra, Alejandra, Tohka, Jussi

arXiv.org Artificial Intelligence

Background and Objective: Advances in electron microscopy (EM) now allow three-dimensional (3D) imaging of hundreds of micrometers of tissue with nanometer-scale resolution, providing new opportunities to study the ultrastructure of the brain. In this work, we introduce the freely available Matlab-based gACSON software for visualization, segmentation, assessment, and morphology analysis of myelinated axons in 3D-EM volumes of brain tissue samples. Methods: The software is equipped with a graphical user interface (GUI). It automatically segments the intra-axonal space of myelinated axons and their corresponding myelin sheaths and allows manual segmentation, proofreading, and interactive correction of the segmented components. Results: We illustrate the use of the software by segmenting and analyzing myelinated axons in six 3D-EM volumes of rat somatosensory cortex after sham surgery or traumatic brain injury (TBI). Our results suggest that the equivalent diameter of myelinated axons in somatosensory cortex was decreased in TBI animals five months after the injury. Conclusions: Our results indicate that gACSON is a valuable tool for visualization, segmentation, assessment, and morphology analysis of myelinated axons in 3D-EM volumes. Introduction: Assessing the structure of the brain is critical to better understanding its normal and abnormal functioning. Advances in electron microscopy (EM) now allow three-dimensional (3D) imaging of hundreds of micrometers of tissue with nanometer-scale resolution, providing new opportunities to study the ultrastructure of the brain [1, 2]. Quantitative analysis of 3D-EM data, such as morphological assessment of ultrastructure, spatial distribution, or connectivity of cells, requires the instance segmentation of individual ultrastructural components [3, 4, 5]. Performing this segmentation manually is tedious, if not impossible, due to the large size and enormous number of components in typical 3D-EM data.
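As one concrete example of the morphology measures mentioned above, the equivalent diameter of an axon cross-section is the diameter of a circle with the same area, d = sqrt(4A/pi). A minimal sketch (our own illustration, not gACSON code):

```python
import math

def equivalent_diameter(area):
    """Diameter of a circle with the same area as the given cross-section.

    A standard morphology measure: d = sqrt(4 * A / pi).
    """
    return math.sqrt(4.0 * area / math.pi)

# A circle of radius 1 has area pi, so its equivalent diameter is 2.
print(equivalent_diameter(math.pi))  # → 2.0
```

In a segmentation pipeline, the area would come from counting voxels in an axon's cross-sectional slice and scaling by the voxel size.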


LogHD: Robust Compression of Hyperdimensional Classifiers via Logarithmic Class-Axis Reduction

Yun, Sanggeon, Oh, Hyunwoo, Masukawa, Ryozo, Mercati, Pietro, Bastian, Nathaniel D., Imani, Mohsen

arXiv.org Artificial Intelligence

Hyperdimensional computing (HDC) suits memory, energy, and reliability-constrained systems, yet the standard "one prototype per class" design requires $O(CD)$ memory (with $C$ classes and dimensionality $D$). Prior compaction reduces $D$ (feature axis), improving storage/compute but weakening robustness. We introduce LogHD, a logarithmic class-axis reduction that replaces the $C$ per-class prototypes with $n\!\approx\!\lceil\log_k C\rceil$ bundle hypervectors (alphabet size $k$) and decodes in an $n$-dimensional activation space, cutting memory to $O(D\log_k C)$ while preserving $D$. LogHD uses a capacity-aware codebook and profile-based decoding, and composes with feature-axis sparsification. Across datasets and injected bit flips, LogHD attains competitive accuracy with smaller models and higher resilience at matched memory. Under equal memory, it sustains target accuracy at roughly $2.5$-$3.0\times$ higher bit-flip rates than feature-axis compression; an ASIC instantiation delivers $498\times$ energy efficiency and $62.6\times$ speedup over an AMD Ryzen 9 9950X and $24.3\times$/$6.58\times$ over an NVIDIA RTX 4090, and is $4.06\times$ more energy-efficient and $2.19\times$ faster than a feature-axis HDC ASIC baseline.
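The class-axis reduction can be illustrated with a toy codebook: each of the $C$ classes receives a length-$n$ code over an alphabet of size $k$, so $n \approx \lceil\log_k C\rceil$ bundles replace the $C$ prototypes. A minimal sketch (our own illustration; names and details are assumptions, not the paper's capacity-aware construction):

```python
import math

def codebook(C, k):
    """Assign each of C classes a length-n base-k code, n = ceil(log_k C)."""
    n = max(1, math.ceil(math.log(C, k)))
    codes = []
    for c in range(C):
        digits, x = [], c
        for _ in range(n):      # little-endian base-k digits of c
            digits.append(x % k)
            x //= k
        codes.append(tuple(digits))
    return n, codes

def decode(activation_digits, codes):
    """Map an n-digit activation profile back to a class index."""
    return codes.index(tuple(activation_digits))

# 51 classes with alphabet size k = 4: 3 codes replace 51 prototypes.
n, codes = codebook(C=51, k=4)
print(n, len(set(codes)))
```

The memory saving in the abstract follows directly: storing $n$ bundle hypervectors of dimensionality $D$ costs $O(D \log_k C)$ instead of the $O(CD)$ of one prototype per class.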


Scalable GPU-Based Integrity Verification for Large Machine Learning Models

Spoczynski, Marcin, Melara, Marcela S.

arXiv.org Artificial Intelligence

We present a security framework that strengthens distributed machine learning by standardizing integrity protections across CPU and GPU platforms and significantly reducing verification overheads. Our approach co-locates integrity verification directly with large ML model execution on GPU accelerators, resolving the fundamental mismatch between how large ML workloads typically run (primarily on GPUs) and how security verifications traditionally operate (in separate CPU-based processes), delivering both immediate performance benefits and long-term architectural consistency. By performing cryptographic operations natively on GPUs using dedicated compute units (e.g., Intel Arc's XMX units, NVIDIA's Tensor Cores), our solution eliminates the architectural bottlenecks that can plague traditional CPU-based verification systems when dealing with large models. This approach leverages the same high memory bandwidth and parallel processing primitives that power GPU ML workloads, ensuring integrity checks keep pace with model execution even for massive models exceeding 100GB. The framework establishes a common integrity verification mechanism that works consistently across different GPU vendors and hardware configurations. By anticipating future capabilities for creating secure channels between trusted execution environments and GPU accelerators, we provide a hardware-agnostic foundation that enterprise teams can deploy regardless of their underlying CPU and GPU infrastructures.


Nike's Robotic Shoe Gets Humans One Step Closer to Cyborg

WIRED

Project Amplify is Nike's latest attempt to put some spring in your step with help from a powered mechanism that enhances the natural movement of the human ankle and lower leg. If you want to run faster or farther, you have options. You can put in the work, getting up 40 minutes earlier to train, changing your diet, going harder and longer on each of your runs to build up strength. Or, you can strap on one of Nike's new robot shoes and mechanically boost your speed, your stamina, and your overall performance in a flash. Sounds way easier, and probably more fun too.